political philosophy
Lecture I: Governing the Algorithmic City
A century ago, John Dewey observed that '[s]team and electricity have done more to alter the conditions under which men associate together than all the agencies which affected human relationships before our time'. In the last few decades, computing technologies have had a similar effect. Political philosophy's central task is to help us decide how to live together, by analysing our social relations, diagnosing their failings, and articulating ideals to guide their revision. But these profound social changes have left scarcely a dent in the model of social relations that (analytical) political philosophers assume. This essay aims to reverse that trend. It first builds a model of our novel social relations as they are now, and as they are likely to evolve, and then explores how those differences affect our theories of how to live together. I introduce the 'Algorithmic City', the network of algorithmically-mediated social relations, then characterise the intermediary power by which it is governed. I show how algorithmic governance raises new challenges for political philosophy concerning the justification of authority, the foundations of procedural legitimacy, and the possibility of justificatory neutrality.
Sure, why not: China built a chatbot based on Xi Jinping
Why not try a conversation with the leader of China? There's a new chatbot in town and it's based on Xi Jinping. As a matter of fact, it was trained using the 'thoughts' of the Chinese leader. I put thoughts in quotes because researchers didn't use some kind of new mind-reading technology. Chinese officials just used a bunch of his books and papers for training purposes, according to a report by The Financial Times.
Attention is all they need: Cognitive science and the (techno)political economy of attention in humans and machines
de la Torre, Pablo González, Pérez-Verdugo, Marta, Barandiaran, Xabier E.
This paper critically analyses the "attention economy" within the framework of cognitive science and techno-political economics, as applied to both human and machine interactions. We explore how current business models, particularly in digital platform capitalism, harness user engagement by strategically shaping attentional patterns. These platforms utilize advanced AI and massive data analytics to enhance user engagement, creating a cycle of attention capture and data extraction. We review contemporary (neuro)cognitive theories of attention and platform engagement design techniques, and criticize classical cognitivist and behaviourist theories for their inadequacies in addressing the potential harms of such engagement to user autonomy and wellbeing. Instead, 4E approaches to cognitive science, which emphasize the embodied, extended, enactive, and ecological aspects of cognition, offer an intrinsic normative standpoint and a more integrated understanding of how attentional patterns are actively constituted by adaptive digital environments. By examining the precarious nature of habit formation in digital contexts, we reveal the techno-economic underpinnings that threaten personal autonomy by disaggregating habits away from the individual and into an AI-managed collection of behavioural patterns. Our current predicament suggests the necessity of a paradigm shift towards an ecology of attention. This shift aims to foster environments that respect and preserve human cognitive and social capacities, countering the exploitative tendencies of cognitive capitalism.
Taming the Terminator: Law, ethics and artificial intelligence
Seth Lazar is a Professor in the School of Philosophy at the ANU, lead CI on the ARC grant 'Ethics and Risk', director of a Templeton World Charity Foundation project on 'Moral Skill and Artificial Intelligence', and project leader of the major interdisciplinary research project Humanising Machine Intelligence. In 2019, he was awarded the ANU Vice Chancellor's award for excellence in research. A central focus of his early work on the ethics of war was the necessity of taking an approach more grounded in political philosophy than in moral philosophy; the same redirection is necessary for work on the morality, law and politics of data and AI. He is also an Area Editor at Ergo, an editor of Philosophers' Imprint, and on the editorial board of Oxford Studies in Political Philosophy.
Whither Fair Clustering?
Within the relatively busy area of fair machine learning that has been dominated by classification fairness research, fairness in clustering has started to see some recent attention. In this position paper, we assess the existing work in fair clustering and observe that there are several directions that are yet to be explored, and postulate that the state-of-the-art in fair clustering has been quite parochial in outlook. We posit that widening the normative principles targeted, characterizing shortfalls where the target cannot be achieved fully, and making use of knowledge of downstream processes can significantly widen the scope of fair clustering research. At a time when clustering and unsupervised learning are being increasingly used to make and influence decisions that matter significantly to human lives, we believe that widening the ambit of fair clustering is of immense significance.
Algorithmic Fairness from a Non-ideal Perspective
Fazelpour, Sina, Lipton, Zachary C.
Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a variety of algorithms in attempts to satisfy subsets of these parities or to trade off the degree to which they are satisfied against utility. In this paper, we connect this approach to 'fair machine learning' to the literature on ideal and non-ideal methodological approaches in political philosophy. The ideal approach requires positing the principles according to which a just world would operate. In the most straightforward application of ideal theory, one supports a proposed policy by arguing that it closes a discrepancy between the real and the perfectly just world. However, by failing to account for the mechanisms by which our non-ideal world arose, the responsibilities of various decision-makers, and the impacts of proposed policies, naive applications of ideal thinking can lead to misguided interventions. In this paper, we demonstrate a connection between the fair machine learning literature and the ideal approach in political philosophy, and argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles faced by the ideal approach. We conclude with a critical discussion of the harms of misguided solutions, a reinterpretation of impossibility results, and directions for future research.
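One concrete instance of the 'statistical parities' the abstract mentions is demographic parity: the gap in positive-prediction rates between two groups. A minimal sketch, with illustrative names and made-up data (not taken from the paper):

```python
# Demographic parity gap: |P(yhat=1 | group 0) - P(yhat=1 | group 1)|.
# A gap of 0 means both groups receive positive predictions at equal rates.

def demographic_parity_gap(preds, groups):
    """preds: binary predictions (0/1); groups: group label (0/1) per instance."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate(0) - rate(1))

preds  = [1, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 1, 1, 1]
# group 0 positive rate = 2/3, group 1 positive rate = 1/3, gap ≈ 0.333
print(demographic_parity_gap(preds, groups))
```

The authors' point is precisely that satisfying such a metric, taken on its own, can mimic a 'perfectly just world' comparison without engaging with how the disparity arose.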
Ethics of Technology Needs More Political Philosophy
As a driver, have you ever asked yourself whether to make left turns? Unprotected left turns, that is, left turns with oncoming traffic, are among the most difficult and dangerous driving maneuvers. Although the risk of each individual left turn is negligible, if you are designing the behavior of a large fleet of self-driving cars, small individual risks add up to a significant number of expected injuries in the aggregate. Whether a fleet of cars should make left turns is a question that any developer of self-driving cars and any designer of mapping and routing applications faces today. A more general issue is at stake here: the decision of whether to make left turns involves a trade-off between safety and mobility (the time it takes to get to a destination).
Escaping the State of Nature: A Hobbesian Approach to Cooperation in Multi-agent Reinforcement Learning
Cooperation is a phenomenon that has been widely studied across many different disciplines. In the field of computer science, the modularity and robustness of multi-agent systems offer significant practical advantages over individual machines. At the same time, agents using standard reinforcement learning algorithms often fail to achieve long-term, cooperative strategies in unstable environments when there are short-term incentives to defect. Political philosophy, on the other hand, studies the evolution of cooperation in humans who face similar incentives to act individualistically, but nevertheless succeed in forming societies. Thomas Hobbes in Leviathan provides the classic analysis of the transition from a pre-social State of Nature, where consistent defection results in a constant state of war, to stable political community through the institution of an absolute Sovereign. This thesis argues that Hobbes's natural and moral philosophy are strikingly applicable to artificially intelligent agents and aims to show that his political solutions are experimentally successful in producing cooperation among modified Q-Learning agents. Cooperative play is achieved in a novel Sequential Social Dilemma called the Civilization Game, which models the State of Nature by introducing the Hobbesian mechanisms of opponent learning awareness and majoritarian voting, leading to the establishment of a Sovereign.
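The short-term incentive to defect that the abstract describes can be made concrete with the standard Prisoner's Dilemma payoffs. A minimal sketch (this is not the thesis's Civilization Game, and the payoff numbers are the conventional textbook values): for a reward-maximising agent, defection is the best response whatever the opponent does.

```python
# One-shot Prisoner's Dilemma: (my move, opponent move) -> my reward.
# Conventional payoffs: mutual cooperation 3, mutual defection 1,
# exploiting a cooperator 5, being exploited 0.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(opponent_move: str) -> str:
    """Greedy reward-maximising reply, as a myopic RL agent would learn."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)])

print(best_response("C"), best_response("D"))  # → D D
```

Defection strictly dominates, so myopic learners converge on the mutual-defection 'state of war'; the thesis's Hobbesian mechanisms (opponent learning awareness and majoritarian voting for a Sovereign) are interventions designed to break out of exactly this equilibrium.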